
    PYRO-NN: Python Reconstruction Operators in Neural Networks

    Purpose: Recently, several attempts have been made to transfer deep learning to medical image reconstruction. An increasing number of publications follow the concept of embedding CT reconstruction as a known operator into a neural network. However, most of the approaches presented lack an efficient CT reconstruction framework fully integrated into deep learning environments. As a result, many approaches are forced to use workarounds for mathematically unambiguously solvable problems. Methods: PYRO-NN is a generalized framework for embedding known operators into the prevalent deep learning framework TensorFlow. The current status includes state-of-the-art parallel-, fan- and cone-beam projectors and back-projectors accelerated with CUDA and provided as TensorFlow layers. On top of that, the framework provides a high-level Python API to conduct FBP and iterative reconstruction experiments with data from real CT systems. Results: The framework provides all necessary algorithms and tools to design end-to-end neural network pipelines with integrated CT reconstruction algorithms. The high-level Python API allows simple use of the layers, as known from TensorFlow. To demonstrate the capabilities of the layers, the framework comes with three baseline experiments: a cone-beam short-scan FDK reconstruction, a CT reconstruction filter learning setup, and a TV-regularized iterative reconstruction. All algorithms and tools are referenced to a scientific publication and compared to existing non-deep-learning reconstruction frameworks. The framework is available as open-source software at \url{https://github.com/csyben/PYRO-NN}. Conclusions: PYRO-NN integrates with the prevalent deep learning framework TensorFlow and allows setting up end-to-end trainable neural networks in the medical image reconstruction context. We believe that the framework will be a step towards reproducible research.
    Comment: V1: Submitted to Medical Physics, 11 pages, 7 figures
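    The filter-learning baseline mentioned above can be illustrated with a minimal TensorFlow sketch of the known-operator idea: a fixed (non-trainable) reconstruction operator wrapped as a layer, preceded by a trainable sinogram filter. The tiny dense system matrix and the layer names below are illustrative stand-ins, not the PYRO-NN API or its CUDA projectors.

```python
# Minimal sketch of the known-operator idea, assuming a toy dense system matrix as a
# stand-in for PYRO-NN's CUDA projectors; layer names are illustrative, not the PYRO-NN API.
import numpy as np
import tensorflow as tf

N, M = 16 * 16, 24 * 16                      # image pixels, detector bins x angles (toy sizes)
rng = np.random.default_rng(0)
A = (rng.random((M, N)) * (rng.random((M, N)) < 0.05)).astype(np.float32)  # sparse toy projector

class KnownBackProjection(tf.keras.layers.Layer):
    """Back-projection A^T y wrapped as a fixed (non-trainable) known operator."""
    def __init__(self, system_matrix):
        super().__init__()
        self.A = tf.constant(system_matrix)

    def call(self, sinogram):
        # (batch, M) @ (M, N) == A^T y applied to every sinogram in the batch
        return tf.matmul(sinogram, self.A)

class LearnableFilter(tf.keras.layers.Layer):
    """Trainable per-bin weighting of the sinogram (filter-learning setup)."""
    def __init__(self, n_bins):
        super().__init__()
        self.w = self.add_weight(shape=(n_bins,), initializer="ones", trainable=True)

    def call(self, sinogram):
        return self.w * sinogram

model = tf.keras.Sequential([LearnableFilter(M), KnownBackProjection(A)])
model.compile(optimizer="adam", loss="mse")

# Train only the filter so that filtered back-projection matches reference images.
x_true = rng.random((8, N)).astype(np.float32)
y_sino = x_true @ A.T                        # simulated measurements
model.fit(y_sino, x_true, epochs=2, verbose=0)
```

    Because the back-projection layer is fixed, gradients only flow into the filter weights, which is the essence of embedding a known operator into the network.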

    Dynamic Reconstruction with Statistical Ray Weighting for C-Arm CT Perfusion Imaging

    Tissue perfusion measurement using C-arm angiography systems is a novel technique with potentially high benefit for catheter-guided treatment of stroke in the interventional suite. However, perfusion C-arm CT (PCCT) is challenging: the slow C-arm rotation speed only allows measuring samples of contrast time attenuation curves (TACs) every 5–6 s if reconstruction algorithms for static data are used. Furthermore, the peaks of the tissue TACs typically lie in a range of 5–30 HU, so perfusion imaging is very sensitive to noise. Recently, we presented a dynamic, iterative reconstruction (DIR) approach that reconstructs TACs described by a weighted sum of linear spline functions with a regularization based on joint bilateral filtering (JBF). In this work, we incorporate statistical ray weighting into the algorithm and show how this helps to improve the reconstructed cerebral blood flow (CBF) maps in a simulation study with a realistic dynamic brain phantom. The Pearson correlation of the CBF maps to the ground-truth maps increases from 0.85 (FDK), 0.87 (FDK with JBF), and 0.90 (DIR with JBF) to 0.92 (DIR with JBF and ray weighting). The results suggest that the statistical ray weighting approach improves the diagnostic accuracy of PCCT based on DIR.
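    As a sketch of how statistical ray weighting enters the data-fidelity term, the snippet below minimizes a weighted least-squares objective in which strongly attenuated (photon-starved) rays are down-weighted. The weighting w_i = exp(-p_i), the toy system matrix, and the plain gradient iteration are illustrative assumptions; the paper's spline-based TAC model and JBF regularization are not reproduced.

```python
# Sketch of statistical ray weighting inside a simple iterative reconstruction.
# Assumption: weights proportional to exp(-p_i), i.e. to the detected photon count,
# as in common penalized-weighted-least-squares formulations.
import numpy as np

def reconstruct_pwls(A, p, n_iter=300):
    """Minimize sum_i w_i * ((A x)_i - p_i)^2 with statistical ray weights w_i = exp(-p_i)."""
    w = np.exp(-p)                                       # strongly attenuated (noisy) rays get low weight
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 * w.max())   # conservative gradient step size
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        r = A @ x - p                                    # residual in the projection domain
        x -= step * (A.T @ (w * r))                      # weighted least-squares gradient step
    return x

# Toy problem: a 32x32 image probed by 1500 random rays with additive noise.
rng = np.random.default_rng(0)
A = rng.random((1500, 32 * 32)) * (rng.random((1500, 32 * 32)) < 0.05)
x_true = 0.05 * rng.random(32 * 32)
p = A @ x_true + rng.normal(0.0, 0.02, size=1500)
x_rec = reconstruct_pwls(A, p)
print("MSE:", float(np.mean((x_rec - x_true) ** 2)))
```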

    Deep OCT Angiography Image Generation for Motion Artifact Suppression

    Eye movements, blinking, and other motion during the acquisition of optical coherence tomography (OCT) data can lead to artifacts when the data are processed into OCT angiography (OCTA) images. Affected scans appear as high-intensity (white) or missing (black) regions, resulting in lost information. The aim of this research is to fill these gaps using a deep generative model for OCT-to-OCTA image translation that relies on a single intact OCT scan. To this end, a U-Net is trained to extract the angiographic information from OCT patches. At inference time, a detection algorithm finds outlier OCTA scans based on their surroundings, which are then replaced by the output of the trained network. We show that generative models can augment the missing scans. The augmented volumes could then be used for 3-D segmentation or to increase the diagnostic value.
    Comment: Accepted at BVM 202
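    A minimal sketch of the inference-time repair step follows: flag outlier OCTA B-scans by comparing each slice with its neighbours, then replace flagged slices with the network's OCT-to-OCTA prediction. The z-score criterion, window size, and the `unet` callable (standing in for the trained model) are assumptions for illustration, not the paper's exact procedure.

```python
# Replace artifact-affected OCTA slices with network-generated ones (illustrative outlier rule).
import numpy as np

def repair_octa_volume(oct_volume, octa_volume, unet, window=5, z_thresh=2.5):
    """oct_volume, octa_volume: arrays of shape (n_slices, H, W); unet maps an OCT slice to OCTA."""
    means = octa_volume.mean(axis=(1, 2))
    repaired = octa_volume.copy()
    for i in range(len(means)):
        lo, hi = max(0, i - window), min(len(means), i + window + 1)
        neighbours = np.delete(means[lo:hi], i - lo)            # exclude the slice itself
        z = abs(means[i] - neighbours.mean()) / (neighbours.std() + 1e-8)
        if z > z_thresh:                                        # white/black artifact slice
            repaired[i] = unet(oct_volume[i])                   # generate OCTA from the intact OCT scan
    return repaired

# Usage with a dummy "network" (identity) and random data:
vol_oct = np.random.rand(64, 128, 128).astype(np.float32)
vol_octa = np.random.rand(64, 128, 128).astype(np.float32)
vol_octa[20] = 1.0                                              # simulate a saturated (white) slice
fixed = repair_octa_volume(vol_oct, vol_octa, unet=lambda x: x)
```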

    KidNet: An Automated Framework for Renal Lesions Detection and Segmentation in CT Images

    Renal lesion segmentation and morphological assessment are essential for improving diagnosis and our understanding of renal cancer, which in turn is imperative for reducing the risk of mortality and morbidity in patients. In this paper, we propose an automatic image-based method to first detect kidneys in CT images and then segment both kidneys and lesions at higher resolution. Kidneys are detected using an encoder-decoder method trained on low-resolution images. Based on the probability maps generated by the detector model, we identify the corresponding kidney regions and segment both kidneys and lesions at higher resolution while reducing false-positive voxels. We evaluate our approach on the KiTS19 challenge data set and demonstrate that our proposed method generalizes to unseen clinical CTs of the abdomen.
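    A sketch of the two-stage pipeline described above: a low-resolution detector yields a kidney probability map, the thresholded map defines a padded bounding box, and only that region is passed at high resolution to the kidney/lesion segmenter. The model callables, threshold, margin, and scale factor are illustrative assumptions.

```python
# Two-stage detect-then-segment sketch: coarse probability map -> ROI -> high-resolution segmentation.
import numpy as np

def roi_from_probability(prob_map, threshold=0.5, margin=8):
    """Return slices covering prob_map > threshold, padded by a voxel margin."""
    mask = prob_map > threshold
    if not mask.any():
        return None
    coords = np.argwhere(mask)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + margin + 1, np.array(prob_map.shape))
    return tuple(slice(l, h) for l, h in zip(lo, hi))

def two_stage_segmentation(ct_lowres, ct_highres, detector, segmenter, scale):
    prob = detector(ct_lowres)                       # stage 1: coarse kidney probability map
    roi_lo = roi_from_probability(prob)
    if roi_lo is None:
        return np.zeros_like(ct_highres, dtype=np.uint8)
    # Map the low-resolution ROI to high-resolution voxel indices.
    roi_hi = tuple(slice(s.start * scale, s.stop * scale) for s in roi_lo)
    seg = np.zeros_like(ct_highres, dtype=np.uint8)
    seg[roi_hi] = segmenter(ct_highres[roi_hi])      # stage 2: kidney + lesion labels inside the ROI
    return seg

# Dummy usage with threshold-based stand-ins for the trained networks:
ct_lo = np.random.rand(32, 32, 32)
ct_hi = np.random.rand(128, 128, 128)
seg = two_stage_segmentation(ct_lo, ct_hi, detector=lambda v: (v > 0.9).astype(float),
                             segmenter=lambda v: (v > 0.5).astype(np.uint8), scale=4)
```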

    Fast and robust detection of solar modules in electroluminescence images

    Fast, non-destructive and on-site quality control tools, mainly highly sensitive imaging techniques, are important to assess the reliability of photovoltaic plants. To minimize the risk of further damage and electrical yield losses, electroluminescence (EL) imaging is used to detect, at an early stage, local defects that might cause future electrical losses. For automated defect recognition on EL measurements, robust detection and rectification of modules, as well as an optional segmentation into cells, are required. This paper introduces a method to detect solar modules and the crossing points between solar cells in EL images. We only require 1-D image statistics for the detection, resulting in an approach that is computationally efficient. In addition, the method is able to detect modules under perspective distortion and in scenarios where multiple modules are visible in the image. We compare our method to the state of the art and show that it is superior in the presence of perspective distortion, while its performance on images where the module is roughly coplanar to the detector is similar to that of the reference method. Finally, we show that our method greatly improves on the reference method in terms of computational time.
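    The core idea of using 1-D image statistics can be sketched as follows: the dark gaps between cells produce pronounced minima in the column- or row-wise mean intensity profile of an EL image, and the positions of these minima give candidate cell boundaries. The thresholding rule below is an illustrative assumption, not the paper's exact detection procedure.

```python
# Candidate cell-boundary detection from 1-D intensity statistics of an EL image.
import numpy as np

def cell_boundaries(el_image, axis=0, k=1.0):
    """Return candidate cell/module boundary positions along the given axis."""
    profile = el_image.mean(axis=axis)                    # 1-D intensity statistic
    dark = profile < profile.mean() - k * profile.std()   # clearly darker than average
    idx = np.flatnonzero(dark)
    if idx.size == 0:
        return np.array([], dtype=int)
    # Collapse each run of consecutive dark positions to its centre.
    runs = np.split(idx, np.where(np.diff(idx) > 1)[0] + 1)
    return np.array([int(r.mean()) for r in runs])

# Toy EL image: bright cells separated by dark gap columns every 50 pixels.
img = np.full((300, 500), 200.0)
img[:, ::50] = 20.0
print(cell_boundaries(img, axis=0))   # column positions of the gaps
```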

    Hybrid adiabatic quantum computing for tomographic image reconstruction -- opportunities and limitations

    Our goal is to reconstruct tomographic images with few measurements and a low signal-to-noise ratio. In clinical imaging, this helps to improve patient comfort and reduce radiation exposure. As quantum computing advances, we propose to use an adiabatic quantum computer and associated hybrid methods to solve the reconstruction problem. Tomographic reconstruction is an ill-posed inverse problem. We test our reconstruction technique for image size, noise content, and underdetermination of the measured projection data. We then present the reconstructed binary and integer-valued images of up to 32 by 32 pixels. The demonstrated method competes with traditional reconstruction algorithms and is superior in terms of robustness to noise and reconstruction from few projections. We postulate that hybrid quantum computing will soon reach maturity for real applications in tomographic reconstruction. Finally, we point out the current limitations regarding problem size and the interpretability of the algorithm.
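    To make the connection to adiabatic/annealing hardware concrete, the sketch below casts binary tomographic reconstruction as a QUBO: for binary x, ||Ax - b||^2 expands into a quadratic form x^T Q x (plus a constant), which is the kind of objective a quantum or hybrid sampler minimizes. The toy incidence matrix and the brute-force minimizer are stand-ins; the paper's hybrid workflow and integer-valued encoding are not reproduced.

```python
# Binary tomographic reconstruction as a QUBO: minimize ||A x - b||^2 over x in {0,1}^n.
# Using x_i^2 = x_i, the linear term -2 A^T b moves onto the diagonal of Q = A^T A.
import itertools
import numpy as np

def qubo_from_projection(A, b):
    """Q such that x^T Q x = ||A x - b||^2 - ||b||^2 for binary x."""
    Q = A.T @ A
    Q[np.diag_indices_from(Q)] += -2.0 * (A.T @ b)
    return Q

def brute_force_qubo(Q):
    """Exhaustive minimizer (stand-in for a quantum/hybrid sampler); feasible only for tiny n."""
    n = Q.shape[0]
    best_x, best_e = None, np.inf
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits, dtype=float)
        e = x @ Q @ x
        if e < best_e:
            best_x, best_e = x, e
    return best_x

# Toy problem: a 3x3 binary image measured by 12 random 0/1 ray-pixel incidence rows.
rng = np.random.default_rng(1)
A = rng.integers(0, 2, size=(12, 9)).astype(float)
x_true = rng.integers(0, 2, size=9).astype(float)
b = A @ x_true
x_rec = brute_force_qubo(qubo_from_projection(A, b))
print("projection residual:", float(np.sum((A @ x_rec - b) ** 2)))   # 0.0 for an exact solution
```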

    First identification of large electric monopole strength in well-deformed rare earth nuclei

    Excited states in the well-deformed rare earth isotopes $^{154}$Sm and $^{166}$Er were populated via "safe" Coulomb excitation at the Munich MLL Tandem accelerator. Conversion electrons were registered in a cooled Si(Li) detector in conjunction with a magnetic transport and filter system, the Mini-Orange spectrometer. For the first excited $0^+$ state in $^{154}$Sm at 1099 keV, a large value of the monopole strength for the transition to the ground state of $\rho^2(\text{E0}; 0^+_2 \to 0^+_\text{g}) = 96(42)\cdot 10^{-3}$ could be extracted. This confirms the interpretation of the lowest excited $0^+$ state in $^{154}$Sm as the collective $\beta$-vibrational excitation of the ground state. In $^{166}$Er, the measured large electric monopole strength of $\rho^2(\text{E0}; 0^+_4 \to 0^+_1) = 127(60)\cdot 10^{-3}$ clearly identifies the $0^+_4$ state at 1934 keV as the $\beta$-vibrational excitation of the ground state.
    Comment: submitted to Physics Letters

    Modern machine-learning can support diagnostic differentiation of central and peripheral acute vestibular disorders

    BACKGROUND Diagnostic classification of central vs. peripheral etiologies in acute vestibular disorders remains a challenge in the emergency setting. Novel machine-learning methods may help to support diagnostic decisions. In the current study, we tested the performance of standard and machine-learning approaches in the classification of consecutive patients with acute central or peripheral vestibular disorders.
    METHODS 40 patients with vestibular stroke (19 with and 21 without acute vestibular syndrome (AVS), defined by the presence of spontaneous nystagmus) and 68 patients with peripheral AVS due to vestibular neuritis were recruited in the emergency department in the context of the prospective EMVERT trial (EMergency VERTigo). All patients received a standardized neuro-otological examination including videooculography and posturography in the acute symptomatic stage, and an MRI within 7 days after symptom onset. The diagnostic performance of state-of-the-art scores, such as HINTS (Head Impulse, gaze-evoked Nystagmus, Test of Skew) and ABCD2 (Age, Blood pressure, Clinical features, Duration, Diabetes), for the differentiation of vestibular stroke vs. peripheral AVS was compared to that of various machine-learning approaches: (i) linear logistic regression (LR), (ii) non-linear random forest (RF), (iii) artificial neural network, and (iv) geometric deep learning (Single/MultiGMC). A prospective classification was simulated by ten-fold cross-validation. We analyzed whether machine-estimated feature importances correlate with clinical experience.
    RESULTS Machine-learning methods (e.g., MultiGMC) outperform univariate scores, such as HINTS or ABCD2, for the differentiation of all vestibular strokes vs. peripheral AVS (MultiGMC area under the curve (AUC): 0.96 vs. HINTS/ABCD2 AUC: 0.71/0.58). HINTS performed similarly to MultiGMC for vestibular stroke with AVS (AUC: 0.86), but more poorly for vestibular stroke without AVS (AUC: 0.54). Machine-learning models learn to put different weights on particular features, each of which is relevant from a clinical viewpoint. Established non-linear machine-learning methods like RF and linear methods like LR are less powerful classification models (AUC: 0.89 vs. 0.62).
    CONCLUSIONS Established clinical scores (such as HINTS) provide a valuable baseline assessment for stroke detection in acute vestibular syndromes. In addition, machine-learning methods may have the potential to increase sensitivity and selectivity in the establishment of a correct diagnosis.
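    The cross-validated comparison of the linear and non-linear baselines can be sketched with scikit-learn as below: ten-fold stratified cross-validation reporting the AUC for logistic regression and a random forest. The synthetic feature matrix stands in for the EMVERT features; the neural network and geometric deep-learning models (Single/MultiGMC) are not reproduced.

```python
# Ten-fold cross-validated AUC for a linear (LR) and a non-linear (RF) classifier
# on placeholder tabular features (108 patients, matching the cohort size above).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(108, 20))                  # placeholder neuro-otological features
y = rng.integers(0, 2, size=108)                # 1 = vestibular stroke, 0 = peripheral AVS

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
models = {
    "LR": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "RF": RandomForestClassifier(n_estimators=300, random_state=0),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
    print(f"{name}: mean AUC = {auc.mean():.2f}")
```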